










Let c_k^(t)(s) denote the color of s in G[1] assigned at the t-th iteration, and let …
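The iterative vertex colouring referenced here is the standard 1-dimensional Weisfeiler-Leman (colour refinement) procedure. Below is a minimal sketch of one such refinement loop; the function name `wl_refine` and the adjacency-dict representation are illustrative choices, not taken from the source:

```python
def wl_refine(adj, rounds=3):
    """1-WL colour refinement (sketch).
    adj: dict mapping each vertex to the set of its neighbours.
    Returns the colour of each vertex after `rounds` iterations,
    with colours canonicalised to small integers."""
    colour = {v: 0 for v in adj}  # uniform initial colouring
    for _ in range(rounds):
        # new signature = own colour plus sorted multiset of neighbour colours
        signature = {
            v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
            for v in adj
        }
        # relabel distinct signatures with fresh integer colours
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        colour = {v: palette[signature[v]] for v in adj}
    return colour
```

For example, on the path graph 0-1-2 the two endpoints receive the same colour while the centre vertex receives a different one, since its neighbour multiset differs.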

Neural Information Processing Systems

Such a perspective has been explored by Chen et al. [8], for instance, to build an equivalence between function approximation and graph isomorphism testing by GNNs. For example, consider F_MPNN, the family of all Message Passing Neural Networks on G. A similar result holds for the family of all k-Invariant Graph Networks (k-IGNs). Two k-tuples (i1, ..., ik), (j1, ..., jk) ∈ V^k are said to be in the same equivalence class if there exists a permutation π on V such that (π(i1), ..., π(ik)) = (j1, ..., jk). Thus, both (10) and (11) can be proved analogously to how (11) is proved for case 2.
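The equivalence-class condition on k-tuples can be checked directly from its definition by searching over permutations of V. The following is a brute-force illustrative sketch (the function name `same_equivalence_class` is an assumption, and the exhaustive search is exponential in |V|, so it is only meant to make the definition concrete):

```python
from itertools import permutations

def same_equivalence_class(t1, t2, V):
    """Return True iff some permutation pi of the vertex set V maps
    the k-tuple t1 element-wise onto t2, i.e. the two tuples lie in
    the same equivalence class under vertex permutations."""
    if len(t1) != len(t2):
        return False
    for perm in permutations(V):
        pi = dict(zip(V, perm))  # candidate permutation pi on V
        if tuple(pi[v] for v in t1) == tuple(t2):
            return True
    return False
```

For instance, (0, 1, 1) and (2, 3, 3) over V = {0, 1, 2, 3} are in the same class (take π with π(0) = 2, π(1) = 3), whereas (0, 1) and (2, 2) are not, since a permutation cannot send two distinct vertices to the same vertex.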



The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic

Grau, Bernardo Cuenca, Feng, Eva, Wałęga, Przemysław A.

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) address two key challenges in applying deep learning to graph-structured data: they handle varying size input graphs and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important question. In this paper, we propose GNN architectures that correspond precisely to prominent fragments of first-order logic (FO), including various modal logics as well as more expressive two-variable fragments. To establish these results, we apply methods from finite model theory of first-order and modal logics to the domain of graph representation learning. Our results provide a unifying framework for understanding the logical expressiveness of GNNs within FO.